    Topos-Theoretic Extension of a Modal Interpretation of Quantum Mechanics

    This paper deals with topos-theoretic truth-value valuations of quantum propositions. Concretely, the mathematical framework of a specific type of modal approach is extended to topos theory, and the structures of the resulting truth-value valuations are investigated. The modal approach taken up is based on a determinate lattice D(e,R), a sublattice of the lattice L of all quantum propositions that is determined by a quantum state e and a preferred determinate observable R. The topos-theoretic extension is made in the functor category Sets^{C_R}, whose base category C_R is determined by R. Each true atom, which determines the truth values, true or false, of all propositions in D(e,R), also generates a multi-valued valuation function whose domain is L and whose range is the Heyting algebra given by the subobject classifier in Sets^{C_R}. All true propositions in D(e,R) are assigned the top element of the Heyting algebra by this valuation function. False propositions, including the null proposition, are however assigned values larger than the bottom element. This defect can be removed by use of a subobject semi-classifier. Furthermore, in order to treat all possible determinate observables in a unified framework, other valuations are constructed in the functor category Sets^C, whose base category C includes all the C_R's as subcategories. Although Sets^C has a structure apparently different from that of Sets^{C_R}, a subobject semi-classifier of Sets^C gives valuations completely equivalent to those in the Sets^{C_R}'s.
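
    In outline, and restating only what the abstract itself describes (the notation below, with \mathcal{L} for L and \mathcal{D}(e,R) for D(e,R), is assumed for readability, not taken from the paper), each true atom induces a valuation

        v_e \colon \mathcal{L} \longrightarrow \Omega, \qquad v_e(P) = \top \quad \text{for every true } P \in \mathcal{D}(e,R),

    where \Omega is the Heyting algebra of truth values supplied by the subobject classifier of Sets^{C_R}; the paper's refinement replaces the classifier by a subobject semi-classifier so that false propositions, including the null proposition, are sent to the bottom element.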

    Introducing pattern graph rewriting in novel spatial aggregation procedures for a class of traffic assignment models

    In this study, two novel spatial aggregation methods compatible with a class of traffic assignment models are presented. Both methods are formalized using a category theoretical approach. While this type of formalization is new to the field of transport, it is well known in other fields that require tools for reasoning on complex structures; the method presented stems from a method originally developed to deal with quantum physical processes. The first benefit of adopting this formalization technique is that it provides an intuitive graphical representation while having a rigorous mathematical underpinning. Secondly, it bears close resemblance to regular expressions and functional programming techniques, giving insight into how solvers (i.e. algorithms) can potentially be constructed. The aggregation methods proposed in this paper are compatible with traffic assignment procedures utilising a path travel time function consisting of two components, namely (i) a flow-invariant component representing free-flow travel time, and (ii) a flow-dependent component representing queuing delays. By exploiting the fact that, in practice, most large-scale networks only have a small portion of the network exhibiting queuing delays, the method decomposes the network into a constant free-flowing part that is computed once and a much smaller, demand-varying delay part that requires recomputation across demand scenarios. It is demonstrated that under certain conditions this procedure is lossless. On top of the decomposition method, a path set reduction method is proposed, which reduces the path set to the minimal path set and further decreases computational cost. A large-scale case study is presented to demonstrate that the proposed methods can reduce computation times to less than 5% of the original without loss of accuracy.
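
    The decomposition idea lends itself to a compact illustration. The sketch below is a hedged restatement under assumed names and toy data (the link times, capacities and delay function are illustrative, not the paper's implementation): the flow-invariant part of a path travel time is computed once, while only the flow-dependent delay part is recomputed per demand scenario.

        # Flow-invariant component: fixed free-flow times per link (seconds), computed once.
        free_flow_time = {"a": 120.0, "b": 90.0, "c": 60.0}

        def queuing_delay(link, flows, capacity={"a": 1800.0, "b": 1200.0, "c": 1500.0}):
            """Flow-dependent component: a toy delay that is zero on uncongested links."""
            excess = max(0.0, flows.get(link, 0.0) - capacity[link])
            return 3600.0 * excess / capacity[link]

        path = ["a", "b", "c"]
        constant_part = sum(free_flow_time[l] for l in path)           # computed once

        for flows in ({"b": 1400.0}, {"b": 900.0}):                    # two demand scenarios
            varying_part = sum(queuing_delay(l, flows) for l in path)  # recomputed per scenario
            print(constant_part + varying_part)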

    An efficient event‐based algorithm for solving first order dynamic network loading problems

    In this paper we present a novel solution algorithm for the Generalised Link Transmission Model (G-LTM). It utilises a truly event-based approach supporting the generation of exact results, unlike its time-discretised counterparts. Furthermore, it can also be configured to yield approximate results; when this approach is adopted, its computational complexity decreases dramatically. It is demonstrated on a theoretical as well as a real-world network that, when utilising fixed periods of stationary demand to mimic departure-time demand fluctuations, this novel approach can be efficient while maintaining a high level of result accuracy. The link model is complemented by a generic node model formulation, yielding a proper generic first order dynamic network loading (DNL) solution algorithm.
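
    To make the distinction with time-discretised schemes concrete, the following is a hedged, generic sketch of an event-based loop (it is not the paper's G-LTM algorithm; event names and times are illustrative): the simulation state is advanced only at the exact instants at which something happens, drawn from a priority queue.

        import heapq

        # Priority queue of (time in seconds, event description); contents are illustrative.
        events = [(4.0, "shockwave reaches node"),
                  (9.75, "queue dissipates"),
                  (12.5, "flow change enters link")]
        heapq.heapify(events)

        while events:
            t, what = heapq.heappop(events)   # jump directly to the next event time
            print(f"t = {t:6.2f} s: {what}")
            # handling an event may schedule further events at exact, non-gridded times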

    A lossless spatial aggregation procedure for a class of capacity constrained traffic assignment models incorporating point queues

    In this paper two novel spatial aggregation procedures are proposed: a network aggregation procedure based on a travel time delay decomposition method, and a zonal aggregation procedure based on a path redistribution scheme. The effectiveness of these procedures lies in the fact that they, unlike existing aggregation methods, exploit available information regarding the application context and the characteristics of the adopted traffic assignment procedure. The context considered involves all applications that require path and inter-zonal travel times as output; a typical example of such applications is the class of quick-scan methods, which have become increasingly popular in recent years. The proposed procedures are compatible with a class of traffic assignment procedures incorporating (residual) point queues. Furthermore, one can choose to combine network aggregation with zonal aggregation to increase the effectiveness of the procedure. Results are demonstrated via theoretical examples as well as a large-scale case study. In the case study it is shown that network loading times can be reduced to as little as 4% of the original situation without suffering any information loss.
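
    For the zonal side, only the bookkeeping is easy to illustrate; the sketch below is a hedged example under assumed zone names and demands (the paper's path redistribution scheme, which is what makes the aggregation lossless for this model class, is not reproduced): original zones are mapped onto aggregated zones and origin-destination demand is summed accordingly.

        # Assumed mapping of original zones onto aggregated zones, with toy OD demand (veh/h).
        zone_map = {"z1": "Z_A", "z2": "Z_A", "z3": "Z_B"}
        od_demand = {("z1", "z3"): 200.0, ("z2", "z3"): 150.0}

        aggregated = {}
        for (o, d), q in od_demand.items():
            key = (zone_map[o], zone_map[d])
            aggregated[key] = aggregated.get(key, 0.0) + q
        print(aggregated)   # {('Z_A', 'Z_B'): 350.0}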

    A network science approach to analysing manufacturing sector supply chain networks: Insights on topology

    Due to the increasingly complex nature of modern supply chain networks (SCNs), a recent research trend has focussed on modelling SCNs as complex adaptive systems. Despite the substantial number of studies devoted to such hypothetical modelling efforts, studies analysing the topological properties of real-world SCNs have been relatively rare, mainly due to the scarcity of data. This paper aims to analyse the topological properties of twenty-six SCNs from the manufacturing sector. Moreover, this study aims to establish a general set of topological characteristics that can be observed in real-world SCNs from the manufacturing sector, so that future theoretical work modelling the growth of SCNs in this sector can mimic these observations. It is found that the manufacturing sector SCNs tend to be scale-free with degree exponents below two, tending towards a hub-and-spoke configuration, as opposed to most other scale-free networks, which have degree exponents above two. This observation is significant, since the importance of the degree exponent threshold of two in shaping the growth process of networks is well understood in network science. Other observed topological characteristics of the SCNs include disassortative mixing (in terms of node degree as well as node characteristics) and high modularity. In some networks, we find that node centrality is strongly correlated with the value added by each node to the supply chain. Since the growth mechanism most widely used to model the evolution of SCNs, the Barabasi-Albert model, does not generate scale-free topologies with degree exponents below two, it is concluded that a novel mechanism for modelling the growth of SCNs needs to be developed.
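
    For readers unfamiliar with degree exponent estimation, the following hedged sketch shows one standard way a power-law exponent can be estimated from a degree sequence, the continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(k/k_min)); the abstract does not state which fitting procedure the study used, and the degree sequence below is a toy example.

        import math

        degrees = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40]   # toy degree sequence
        k_min = 1                                    # smallest degree included in the fit
        tail = [k for k in degrees if k >= k_min]
        alpha = 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)
        print(f"estimated degree exponent: {alpha:.2f}")  # ~1.8, i.e. below the threshold of two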

    Specifying and Verifying Concurrent Algorithms with Histories and Subjectivity

    We present a lightweight approach to Hoare-style specifications for fine-grained concurrency, based on a notion of time-stamped histories that abstractly capture atomic changes in the program state. Our key observation is that histories form a partial commutative monoid, a structure fundamental to the representation of concurrent resources. This insight provides us with a unifying mechanism that allows us to treat histories just like heaps in separation logic. For example, both are subject to the same assertion logic and inference rules (e.g., the frame rule). Moreover, the notion of ownership transfer, which usually applies to heaps, has an equivalent in histories. It can be used to formally represent helping, an important design pattern for concurrent algorithms whereby one thread can execute code on behalf of another. Specifications in terms of histories naturally abstract granularity, in the sense that sophisticated fine-grained algorithms can be given the same specifications as their simplified coarse-grained counterparts, making them equally convenient for client-side reasoning. We illustrate our approach on a number of examples and validate all of them in Coq.
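
    The partial-commutative-monoid structure of histories can be illustrated with a small, hedged sketch (given in Python for brevity, whereas the paper's development is mechanised in Coq; the operation names are assumptions): the join of two histories is defined only when their time stamps are disjoint, exactly like disjoint union of heaps.

        def join(h1, h2):
            """Join of two histories (dicts from timestamp to atomic state change).
            Undefined (None) when timestamps overlap, mirroring disjoint heap union."""
            if set(h1) & set(h2):
                return None              # not composable: overlapping timestamps
            return {**h1, **h2}          # commutative join; the empty history {} is the unit

        h_self  = {1: "push(3)", 4: "pop() -> 3"}   # this thread's contribution (illustrative)
        h_other = {2: "push(7)"}                    # the environment's contribution (illustrative)
        print(join(h_self, h_other))                # combined history
        print(join(h_self, {1: "push(9)"}))         # None: the two histories conflict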

    Quantum models of classical mechanics: maximum entropy packets

    In a previous paper, a project of constructing quantum models of classical properties was started. The present paper concludes the project by turning to classical mechanics. The quantum states that maximize entropy for given averages and variances of coordinates and momenta are called ME packets. They generalize the Gaussian wave packets. A non-trivial extension of the partition-function method of probability calculus to quantum mechanics is given. Non-commutativity of quantum variables limits its usefulness; still, the general form of the state operators of ME packets is obtained with its help. The diagonal representation of the operators is found. A general way of calculating averages that can replace the partition function method is described. Classical mechanics is reinterpreted as a statistical theory. Classical trajectories are replaced by classical ME packets. Quantum states approximate classical ones if the product of the coordinate and momentum variances is much larger than the Planck constant. Thus, ME packets with large variances follow their classical counterparts better than Gaussian wave packets.
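
    The classicality criterion stated at the end of the abstract can be written compactly as (notation assumed here: \Delta Q and \Delta P denote the coordinate and momentum spreads of the packet)

        \Delta Q \,\Delta P \;\gg\; \hbar ,

    that is, an ME packet whose coordinate and momentum spreads have a product much larger than the Planck constant follows its classical counterpart closely.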

    A unified framework for traffic assignment: deriving static and quasi‐dynamic models consistent with general first order dynamic traffic assignment models

    This paper presents a theoretical framework to derive static, quasi-dynamic, and semi-dynamic traffic assignment models from a general first order dynamic traffic assignment model. Through explicit derivation from a dynamic model, the resulting models maintain maximum consistency with dynamic models. Further, the derivations can be done with any fundamental diagram, any turn flow restrictions, and deterministic or stochastic route choice. We demonstrate the framework by deriving static (quasi-dynamic) models that explicitly take queuing and spillback into account. These models are generalisations of models previously proposed in the literature. We further discuss all assumptions that are usually implicitly made in the traditional static traffic assignment model.

    Capacity constrained stochastic static traffic assignment with residual point queues incorporating a proper node model

    Static traffic assignment models are still widely applied for strategic transport planning purposes, even though such models produce implausible traffic flows that exceed link capacities and predict incorrect congestion locations. There have been numerous attempts in the literature to add capacity constraints in order to obtain more realistic traffic flows and bottleneck locations, but so far there has not been a satisfactory model formulation. After reviewing the literature, we come to the conclusion that an important piece of the puzzle has been missing so far, namely the inclusion of a proper node model. In this paper we propose a novel path-based static traffic assignment model for finding a stochastic user equilibrium, in which we include a first order node model that yields realistic turn capacities, which are then used to determine consistent traffic flows and residual point queues. The route choice part of the model is specified as a variational inequality problem, while the network loading part is formulated as a fixed point problem. Both problems are solved using existing techniques. We illustrate the model using hypothetical examples, and also demonstrate feasibility on large-scale networks.
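
    For orientation, the variational inequality form of such a route choice equilibrium can be stated in its generic shape (standard notation, not necessarily the paper's exact formulation): find a feasible path-flow vector f^* such that

        C(f^*)^{\top} (f - f^*) \;\ge\; 0 \qquad \text{for all } f \in F,

    where F is the set of path flows satisfying the travel demand and non-negativity, and C(f) is the vector of path costs; the network loading that produces those costs, together with the residual point queues, is then the map whose fixed point is sought in the second part of the model.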

    An improved approach to characterize potash-bearing evaporite deposits, evidenced in North Yorkshire, United Kingdom

    Traditionally, potash mineral deposits have been characterized using downhole geophysical logging in tandem with geochemical analysis of core samples to establish the critical potassium (% K2O) content. These techniques have been employed in a recent exploration study of the Permian evaporite succession of North Yorkshire, United Kingdom, but the characterization of these complex deposits has been led by mineralogical analysis, using quantitative X-ray diffraction (QXRD). The novel QXRD approach provides data on K content with the level of confidence needed for reliable reporting of resources and also identifies and quantifies more precisely the nature of the K-bearing minerals. Errors have also been identified when employing traditional geochemical approaches for this deposit, which would have resulted in underestimated potash grades. QXRD analysis has consistently identified polyhalite (K2Ca2Mg(SO4)4·2H2O) in the Fordon (Evaporite) Formation and sylvite (KCl) in the Boulby Potash and Sneaton Potash members as the principal K-bearing host minerals in North Yorkshire. However, other K hosts, including kalistrontite (K2Sr(SO4)2), a first recorded occurrence in the UK, and a range of boron-bearing minerals have also been detected. Application of the QXRD-led characterization program across the evaporitic basin has helped to produce a descriptive, empirical model for the deposits, including the polyhalite-bearing Shelf and Basin seams and two newly discovered sylvite-bearing bittern salt horizons, the Pasture Beck and Gough seams. The characterization program has enabled a polyhalite mineral inventory in excess of 2.5 billion metric tons (Bt) to be identified, suggesting that this region possesses the world's largest known resource of polyhalite.
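
    As a hedged arithmetic illustration of how quantified K-bearing minerals translate into the critical % K2O grade (the stoichiometric conversion factors below are standard approximate values and the sample composition is invented, not taken from the study):

        # Approximate molar masses (g/mol) used to convert mineral wt% into K2O-equivalent wt%.
        MOLAR_MASS = {"K2O": 94.2, "KCl": 74.55, "polyhalite": 602.9}

        factors = {
            "sylvite":    MOLAR_MASS["K2O"] / (2 * MOLAR_MASS["KCl"]),   # ~0.63
            "polyhalite": MOLAR_MASS["K2O"] / MOLAR_MASS["polyhalite"],  # ~0.16
        }

        sample = {"sylvite": 0.0, "polyhalite": 85.0}   # wt% from QXRD, illustrative only
        k2o_grade = sum(wt * factors[m] for m, wt in sample.items())
        print(f"K2O equivalent: {k2o_grade:.1f} wt%")   # ~13.3 wt% for this toy sample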